Poisson Noise

A Machine Learning-Driven Solution for Denoising Inertial Confinement Fusion Images

Akkus, Asya Y., Wolfe, Bradley T., Chu, Pinghan, Huang, Chengkun, Campbell, Chris S., Alvarez, Mariana Alvarado, Volegov, Petr, Fittinghoff, David, Reinovsky, Robert, Wang, Zhehui

arXiv.org Artificial Intelligence

Neutron imaging is essential for diagnosing and optimizing inertial confinement fusion implosions at the National Ignition Facility. Due to the required 10-micrometer resolution, however, neutron images require reconstruction using iterative algorithms. For low-yield sources, the images may be degraded by various types of noise. Gaussian and Poisson noise often coexist within one image, obscuring fine details and blurring the edges where the source information is encoded. Traditional denoising techniques, such as filtering and thresholding, can inadvertently alter critical features or reshape the noise statistics, potentially impacting the ultimate fidelity of the iterative image reconstruction pipeline. However, recent advances in synthetic data production and machine learning have opened new opportunities to address these challenges. In this study, we present an unsupervised autoencoder with a Cohen-Daubechies-Feauveau (CDF 9/7) wavelet transform in the latent space, designed to suppress mixed Gaussian-Poisson noise while preserving essential image features. The network successfully denoises neutron imaging data. Benchmarking against both simulated and experimental NIF datasets demonstrates that our approach achieves lower reconstruction error and superior edge preservation compared to conventional filtering methods such as block-matching and 3D filtering (BM3D). By validating the effectiveness of unsupervised learning for denoising neutron images, this study establishes a critical first step towards fully AI-driven, end-to-end reconstruction frameworks for ICF diagnostics.
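The abstract gives no implementation, but the CDF 9/7 transform at the heart of the method is easy to illustrate. The sketch below is a classical wavelet-shrinkage baseline, not the paper's autoencoder: it Gaussianizes the Poisson component with the Anscombe transform, then soft-thresholds CDF 9/7 detail coefficients. It assumes PyWavelets is available ("bior4.4" is the CDF 9/7 filter bank used in JPEG2000); the threshold rule and noise levels are illustrative.

```python
# Minimal CDF 9/7 shrinkage baseline for mixed Gaussian-Poisson noise
# (a classical stand-in, NOT the paper's learned latent-space operation).
import numpy as np
import pywt  # PyWavelets; "bior4.4" implements the CDF 9/7 wavelet

def cdf97_denoise(img, sigma=1.0, levels=3):
    """Anscombe VST -> CDF 9/7 soft-thresholding -> crude inverse VST."""
    v = 2.0 * np.sqrt(np.maximum(img, 0.0) + 3.0 / 8.0)  # Poisson -> ~unit Gaussian
    coeffs = pywt.wavedec2(v, "bior4.4", level=levels)
    thr = sigma * np.sqrt(2.0 * np.log(v.size))          # universal threshold
    out = [coeffs[0]]                                    # keep approximation band
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    v_hat = pywt.waverec2(out, "bior4.4")
    return (v_hat / 2.0) ** 2 - 3.0 / 8.0                # crude inverse Anscombe

noisy = np.random.poisson(50.0 * np.ones((128, 128))).astype(float)
estimate = cdf97_denoise(noisy)
```

After the Anscombe transform the Poisson noise has approximately unit variance, hence sigma=1.0 by default; in the paper this shrinkage step is replaced by a learned operation on the wavelet-domain latent space.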


Bregman geometry-aware split Gibbs sampling for Bayesian Poisson inverse problems

Faye, Elhadji Cisse, Fall, Mame Diarra, Dobigeon, Nicolas, Barat, Eric

arXiv.org Machine Learning

This paper proposes a novel Bayesian framework for solving Poisson inverse problems by devising a Monte Carlo sampling algorithm which accounts for the underlying non-Euclidean geometry. To address the challenges posed by the Poisson likelihood -- such as non-Lipschitz gradients and positivity constraints -- we derive a Bayesian model which leverages exact and asymptotically exact data augmentations. In particular, the augmented model incorporates two sets of splitting variables, both derived through a Bregman divergence based on the Burg entropy. Interestingly, the resulting augmented posterior distribution is characterized by conditional distributions which benefit from natural conjugacy properties and preserve the intrinsic geometry of the latent and splitting variables. This allows for efficient sampling via Gibbs steps, which can be performed explicitly for all conditionals except the one incorporating the regularization potential. For the latter, we resort to a Hessian Riemannian Langevin Monte Carlo (HRLMC) algorithm, which is well suited to handle priors with explicit or easily computable score functions. By operating on a mirror manifold, this Langevin step ensures that the sampling satisfies the positivity constraints and more accurately reflects the underlying problem structure. Performance results obtained on denoising, deblurring, and positron emission tomography (PET) experiments demonstrate that the method achieves competitive reconstruction quality compared to optimization- and sampling-based approaches.
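To make the geometry concrete: the Bregman divergence generated by the Burg entropy phi(x) = -sum_i log(x_i) is the Itakura-Saito divergence, and the associated mirror map keeps iterates in the positive orthant. The sketch below verifies that identity numerically and takes one schematic mirror-Langevin step; the step rule and clipping are simplifications of mine, not the authors' HRLMC scheme.

```python
# Burg-entropy geometry sketch (NumPy only); a schematic illustration,
# not the paper's split Gibbs sampler.
import numpy as np

rng = np.random.default_rng(0)

def burg_bregman(x, y):
    # D_phi(x, y) with phi(x) = -sum(log x), grad phi(y) = -1/y
    return np.sum(-np.log(x) + np.log(y) + (x - y) / y)

def itakura_saito(x, y):
    return np.sum(x / y - np.log(x / y) - 1.0)

x = rng.uniform(0.5, 2.0, 5)
y = rng.uniform(0.5, 2.0, 5)
assert np.isclose(burg_bregman(x, y), itakura_saito(x, y))  # same divergence

def mirror_langevin_step(x, grad_U, gamma=1e-3):
    """One schematic mirror-Langevin step; the Burg mirror map keeps x > 0."""
    dual = -1.0 / x                                    # mirror map grad phi(x)
    noise = np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape) / x  # metric sqrt
    dual = np.minimum(dual - gamma * grad_U(x) + noise, -1e-8)  # stay in dual cone
    return -1.0 / dual                                 # inverse mirror map

x_next = mirror_langevin_step(x, grad_U=lambda x: x - 1.0)  # toy potential
```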


Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Inverse Problems

Wei, Deliang, Chen, Peng, Xu, Haobo, Yao, Jiale, Li, Fang, Zeng, Tieyong

arXiv.org Artificial Intelligence

Plug-and-play (PnP) methods with deep denoisers have shown impressive results in imaging problems. They typically require strong convexity or smoothness of the fidelity term and a (residual) non-expansive denoiser for convergence. These assumptions, however, are violated in Poisson inverse problems, and non-expansiveness can hinder denoising performance. To address these challenges, we propose a cocoercive conservative (CoCo) denoiser, which may be (residual) expansive, leading to improved denoising. By leveraging the generalized Helmholtz decomposition, we introduce a novel training strategy that combines Hamiltonian regularization to promote conservativeness and spectral regularization to ensure cocoerciveness. We prove that the CoCo denoiser is a proximal operator of a weakly convex function, enabling a restoration model with an implicit weakly convex prior. The global convergence of PnP methods to a stationary point of this restoration model is established. Extensive experimental results demonstrate that our approach outperforms closely related methods in both visual quality and quantitative metrics.
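The abstract names two penalties: Hamiltonian regularization for conservativeness and spectral regularization for cocoerciveness. Below is a hedged PyTorch sketch of how such penalties could be assembled; the concrete loss forms are my assumptions drawn from the abstract's terminology, and the paper's exact regularizers may differ. Jacobian symmetry is probed with matched JVP/VJP products, and the spectrum of 2J - I is bounded with a rough power-iteration estimate (meaningful only once J is near-symmetric).

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jvp, vjp

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))  # toy stand-in network

x = torch.rand(2, 1, 32, 32)
v = torch.randn_like(x)

# Conservativeness ("Hamiltonian") penalty: a conservative vector field has a
# symmetric Jacobian, so J v and J^T v should agree for random probes v.
_, Jv = jvp(denoiser, x, v, create_graph=True)
_, JTv = vjp(denoiser, x, v, create_graph=True)
sym_penalty = ((Jv - JTv) ** 2).mean()

# Cocoerciveness via a spectral penalty: with a symmetric Jacobian J,
# 1-cocoercivity corresponds to eigenvalues of J in [0, 1], i.e. ||2J - I|| <= 1.
def two_J_minus_I(u):
    _, Ju = jvp(denoiser, x, u, create_graph=True)
    return 2.0 * Ju - u

u = torch.randn_like(x)
for _ in range(5):                        # power iteration; assumes J ~ symmetric
    u = two_J_minus_I(u)
    u = u / (u.norm() + 1e-12)
spec_penalty = torch.relu(two_J_minus_I(u).norm() - 1.0) ** 2

reg_loss = sym_penalty + spec_penalty     # added to the usual denoising loss
```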


High-Quality Self-Supervised Deep Image Denoising

Samuli Laine, Tero Karras, Jaakko Lehtinen, Timo Aila

Neural Information Processing Systems

We describe a novel method for training high-quality image denoising models based on unorganized collections of corrupted images. The training does not need access to clean reference images, or explicit pairs of corrupted images, and can thus be applied in situations where such data is unacceptably expensive or impossible to acquire. We build on a recent technique that removes the need for reference data by employing networks with a "blind spot" in the receptive field, and significantly improve two key aspects: image quality and training efficiency. Our result quality is on par with state-of-the-art neural network denoisers in the case of i.i.d. additive Gaussian noise.
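In this line of work the blind spot is built by restricting each branch's receptive field to a half-plane via shifted convolutions and combining four rotated copies, so the union covers every pixel except the center. Below is a minimal sketch of that construction (heavily simplified; not the authors' full U-Net architecture):

```python
# Minimal blind-spot construction sketch: half-plane convolutions plus four
# rotations; a simplified illustration of the idea, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfPlaneConv(nn.Module):
    """Conv whose output at (i, j) sees only input rows <= i."""
    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, k)
        self.k = k

    def forward(self, x):
        p = self.k // 2
        x = F.pad(x, (p, p, self.k - 1, 0))      # (left, right, top, bottom)
        return self.conv(x)

def shift_down(x):
    # Push the receptive field strictly above the center pixel.
    return F.pad(x, (0, 0, 1, 0))[:, :, :-1, :]

class BlindSpotNet(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.body = nn.Sequential(HalfPlaneConv(1, c), nn.ReLU(),
                                  HalfPlaneConv(c, c), nn.ReLU())
        self.head = nn.Conv2d(4 * c, 1, 1)       # 1x1 head cannot reopen the blind spot

    def forward(self, x):
        feats = []
        for k in range(4):
            xr = torch.rot90(x, k, dims=(2, 3))
            fr = shift_down(self.body(xr))       # strictly-above-center features
            feats.append(torch.rot90(fr, -k, dims=(2, 3)))
        return self.head(torch.cat(feats, dim=1))

out = BlindSpotNet()(torch.rand(1, 1, 64, 64))   # out[..., i, j] ignores in[..., i, j]
```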


Patch-based learning of adaptive Total Variation parameter maps for blind image denoising

Fantasia, Claudio, Calatroni, Luca, Descombes, Xavier, Rekik, Rim

arXiv.org Artificial Intelligence

We consider a patch-based learning approach, defined in terms of neural networks, to estimate spatially adaptive regularisation parameter maps for image denoising with weighted Total Variation, and test it in situations where the noise distribution is unknown. As an example, we consider situations where the noise could be either Gaussian or Poisson and perform a preliminary model selection step with a standard binary classification network. Then, we define a patch-based approach where, at each image pixel, an optimal weighting between TV regularisation and the corresponding data fidelity is learned in a supervised way, using reference natural image patches, upon optimisation of SSIM and in a sliding-window fashion. Extensive numerical results are reported for both noise models, showing significant improvement over results obtained by means of optimal scalar regularisation.
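To see where a spatially adaptive parameter map enters, the sketch below runs plain gradient descent on a smoothed weighted-TV objective, 0.5*||u - f||^2 + sum_i lam_i * sqrt(|grad u|_i^2 + eps^2), with a per-pixel lambda map. The map is supplied as a constant here; in the paper it would come from the learned patch-based network. The optimizer, step size, and smoothing constant are illustrative choices of mine.

```python
# Weighted TV denoising with a per-pixel parameter map (illustrative solver;
# the paper's contribution is learning lam_map, not this descent scheme).
import numpy as np

def weighted_tv_denoise(f, lam_map, n_iter=300, step=0.2, eps=1e-2):
    """Gradient descent on 0.5||u-f||^2 + sum_i lam_i sqrt(|grad u|_i^2 + eps^2)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])        # forward differences,
        uy = np.diff(u, axis=0, append=u[-1:, :])        # Neumann boundaries
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = lam_map * ux / mag, lam_map * uy / mag
        div = (np.diff(px, axis=1, prepend=0.0 * px[:, :1])
               + np.diff(py, axis=0, prepend=0.0 * py[:1, :]))
        u -= step * ((u - f) - div)                      # descent on the objective
    return u

f = np.random.poisson(30.0 * np.ones((64, 64))) / 30.0   # toy noisy image
lam = np.full_like(f, 0.15)   # constant map here; spatially adaptive in the paper
u = weighted_tv_denoise(f, lam)
```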


Reviews: Efficient Neural Codes under Metabolic Constraints

Neural Information Processing Systems

The authors derive optimal monotonic tuning functions under metabolic constraints by reformulating the problem as a constrained optimization problem, which apparently can be solved in closed form. Overall, the paper is of very high quality, but it is too dense and covers too much material for an 8-page NIPS paper. Before I make some more specific comments, I would urge the authors to focus on the single-neuron/pair-of-neurons cases for this paper and leave the population analysis for a later publication or an extended journal version. The population part is only sketched in the paper, and I am not quite sure whether I understand the results and implications (also, there is no figure for this part, which does not help). Specific comments: Equations 3 and 4: I am not sure I understand how equation 4 is a special case of equation 3. Figures 1-3: I find these figures very hard to parse.